ℓ0 Sparsifying Transform Learning with Efficient Optimal Updates and Convergence Guarantees

Authors

  • Saiprasad Ravishankar
  • Yoram Bresler
Abstract

Many applications in signal processing benefit from the sparsity of signals in a certain transform domain or dictionary. Synthesis sparsifying dictionaries that are directly adapted to data have been popular in applications such as image denoising, inpainting, and medical image reconstruction. In this work, we focus instead on the sparsifying transform model, and study the learning of well-conditioned square sparsifying transforms. The proposed algorithms alternate between an ℓ0 “norm”-based sparse coding step and a non-convex transform update step. We derive the exact analytical solution for each of these steps. The proposed solution for the transform update step achieves the global minimum in that step, and also provides speedups over iterative solutions involving conjugate gradients. We establish that our alternating algorithms are globally convergent to the set of local minimizers of the non-convex transform learning problems. In practice, the algorithms are insensitive to initialization. We present results illustrating the promising performance and significant speedups of transform learning over synthesis K-SVD in image denoising.
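The abstract describes an alternating scheme with exact solutions for both steps: an ℓ0 sparse coding step and a transform update step. Below is a minimal NumPy sketch of such an alternation, assuming a sparsity-constrained formulation with regularizer λ(||W||_F² − log|det W|); the exact weighting used in the paper may differ, and the function names (sparse_code, transform_update, learn_transform), the initialization, and the parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sparse_code(W, Y, s):
    """Sparse coding step (assumed form): keep the s largest-magnitude
    entries of each transformed signal W @ y_i and zero out the rest."""
    X = W @ Y
    n = X.shape[0]
    # indices of the (n - s) smallest-magnitude entries in each column
    drop = np.argpartition(np.abs(X), n - s, axis=0)[:n - s, :]
    np.put_along_axis(X, drop, 0.0, axis=0)
    return X

def transform_update(Y, X, lam):
    """Closed-form transform update for fixed sparse codes X, minimizing
    ||W Y - X||_F^2 + lam * (||W||_F^2 - log|det W|)  (assumed weighting)."""
    n = Y.shape[0]
    L = np.linalg.cholesky(Y @ Y.T + lam * np.eye(n))   # Y Y^T + lam*I = L L^T
    Linv = np.linalg.inv(L)
    Q, sig, Rt = np.linalg.svd(Linv @ Y @ X.T)          # full SVD: Q diag(sig) Rt
    D = np.diag(sig + np.sqrt(sig ** 2 + 2.0 * lam))
    return 0.5 * Rt.T @ D @ Q.T @ Linv

def learn_transform(Y, s, lam, n_iters=50):
    """Alternate between the two exact steps, starting from the identity."""
    n = Y.shape[0]
    W = np.eye(n)
    for _ in range(n_iters):
        X = sparse_code(W, Y, s)
        W = transform_update(Y, X, lam)
    return W, X
```

In this sketch the sparse coding step keeps the s largest-magnitude transform coefficients of each signal, and the transform update requires only one n×n Cholesky factorization and one n×n SVD per iteration, which is what makes a closed-form update attractive compared to running conjugate-gradient iterations inside every outer iteration.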


Related articles

FRIST - Flipping and Rotation Invariant Sparsifying Transform Learning and Applications

Features based on sparse representation, especially using the synthesis dictionary model, have been heavily exploited in signal processing and computer vision. However, synthesis dictionary learning typically involves NP-hard sparse coding and expensive learning steps. Recently, sparsifying transform learning received interest for its cheap computation and its optimal updates in the alternating...



Efficient Blind Compressed Sensing Using Sparsifying Transforms with Convergence Guarantees and Application to MRI

Natural signals and images are well-known to be approximately sparse in transform domains such as Wavelets and DCT. This property has been heavily exploited in various applications in image processing and medical imaging. Compressed sensing exploits the sparsity of images or image patches in a transform domain or synthesis dictionary to reconstruct images from undersampled measurements. In this...


VIDOSAT: High-dimensional Sparsifying Transform Learning for Online Video Denoising

Techniques exploiting the sparsity of images in a transform domain have been effective for various applications in image and video processing. Transform learning methods involve cheap computations and have been demonstrated to perform well in applications such as image denoising and medical image reconstruction. Recently, we proposed methods for online learning of sparsifying transforms from st...


Closed-Form Optimal Updates in Transform Learning

I. TRANSFORM LEARNING. While the idea of learning a synthesis [1] or analysis [2], [3] dictionary for sparse signal representation has received recent attention, these formulations are typically non-convex and NP-hard, and the approximate algorithms are still computationally expensive. In this work, we focus instead on the learning of square sparsifying transforms W ∈ R^{n×n}, and develop efficient...



Journal:
  • IEEE Trans. Signal Processing

Volume 63  Issue

Pages  -

Publication date: 2015